4 research outputs found

    SelfClean: A Self-Supervised Data Cleaning Strategy

    Full text link
    Most benchmark datasets for computer vision contain irrelevant images, near duplicates, and label errors. Consequently, model performance on these benchmarks may not be an accurate estimate of generalization capabilities. This is a particularly acute concern in computer vision for medicine, where datasets are typically small, stakes are high, and annotation processes are expensive and error-prone. In this paper, we propose SelfClean, a general procedure to clean up image datasets by exploiting a latent space learned with self-supervision. By relying on self-supervised learning, our approach focuses on intrinsic properties of the data and avoids annotation biases. We formulate dataset cleaning as either a set of ranking problems, which significantly reduces human annotation effort, or a set of scoring problems, which enables fully automated decisions based on score distributions. We demonstrate that SelfClean achieves state-of-the-art performance in detecting irrelevant images, near duplicates, and label errors within popular computer vision benchmarks, retrieving both injected synthetic noise and natural contamination. In addition, we apply our method to multiple image datasets and confirm an improvement in evaluation reliability.
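The ranking formulation described above can be illustrated with a minimal sketch: embed each image in a latent space and rank image pairs by cosine similarity, so that the most similar pairs (near-duplicate candidates) surface first for human review. This is only an assumed simplification of the idea, not SelfClean's actual implementation; the function name and toy embeddings are hypothetical.

```python
import numpy as np

def near_duplicate_ranking(embeddings: np.ndarray):
    """Rank image pairs by cosine similarity of their latent embeddings.

    `embeddings` has shape (n_images, dim). Illustrative sketch only,
    not the SelfClean procedure itself.
    """
    # L2-normalise so the dot product equals cosine similarity
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = unit @ unit.T
    # Consider each unordered pair once (upper triangle, excluding diagonal)
    i, j = np.triu_indices(len(embeddings), k=1)
    order = np.argsort(-sim[i, j])  # most similar pairs first
    return list(zip(i[order], j[order], sim[i, j][order]))

# Toy example: the first two embeddings are nearly identical,
# so the pair (0, 1) should top the ranking
emb = np.array([[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]])
top_pair = near_duplicate_ranking(emb)[0]
```

A reviewer would then inspect pairs from the top of this ranking downwards, which is what reduces annotation effort compared to checking all pairs.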

    Deep learning in clinical dermatology

    No full text
    The prevalence of skin diseases is high. A recent survey reported that half of the European population is afflicted with skin conditions. However, the resulting demand for dermatological care cannot be met satisfactorily because of a general shortage of dermatologists that will realistically not be filled by the healthcare sector. Alternative solutions should therefore be pursued to increase the capacities of the current healthcare workforce. The recent progress of machine vision enabled by deep learning has allowed researchers to automate parts of dermatologists' workflow with an effective scale-up potential. In this work, we present different approaches based on deep learning that either include aspects of dermatologists' workflow or whose predictions can easily be verified by clinicians. We propose a method for the generation of anatomical maps from patient photographs to assist dermatologists with lesion documentation and enable lesion detection and segmentation systems to stratify their predictions anatomically. Based on key features from dermatological lesion descriptions, we develop an approach for the differential diagnosis of skin diseases. To enable objective severity assessment, we propose a method for the segmentation and quantification of palmoplantar pustular psoriasis, ichthyosis with confetti and hand eczema. Combined with the anatomy approach, we generate the anatomical stratification of hand eczema lesions. To concretize our research efforts, we present an African teledermatology initiative aiming to provide semi-automatic triage of the six most prevalent local skin diseases. Finally, we introduce our framework to enable researchers with a medical background to train and evaluate deep learning models.

    Quantification of Efflorescences in Pustular Psoriasis Using Deep Learning

    No full text
    Objectives: Pustular psoriasis (PP) is one of the most severe and chronic skin conditions. Its treatment is difficult, and measurements of its severity are highly dependent on clinicians' experience. Pustules and brown spots are the main efflorescences of the disease and directly correlate with its activity. We propose an automated deep learning model (DLM) to quantify lesions in terms of count and surface percentage from patient photographs. Methods: In this retrospective study, two dermatologists and a student labeled 151 photographs of PP patients for pustules and brown spots. The DLM was trained and validated with 121 photographs, keeping 30 photographs as a test set to assess the DLM performance on unseen data. We also evaluated our DLM on 213 unstandardized, out-of-distribution photographs of various pustular disorders (referred to as the pustular set), which were ranked from 0 (no disease) to 4 (very severe) by one dermatologist for disease severity. The agreement between the DLM predictions and experts' labels was evaluated with the intraclass correlation coefficient (ICC) for the test set and the Spearman correlation (SC) coefficient for the pustular set. Results: On the test set, the DLM achieved an ICC of 0.97 (95% confidence interval [CI], 0.97–0.98) for count and 0.93 (95% CI, 0.92–0.94) for surface percentage. On the pustular set, the DLM reached a SC coefficient of 0.66 (95% CI, 0.60–0.74) for count and 0.80 (95% CI, 0.75–0.83) for surface percentage. Conclusions: The proposed method quantifies efflorescences from PP photographs reliably and automatically, enabling a precise and objective evaluation of disease activity.
    ISSN:2093-3681
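The Spearman correlation used for the pustular set measures how well the DLM's ordering of patients (by lesion count or surface percentage) agrees with the dermatologist's 0–4 severity ranking. A minimal sketch of that evaluation step, with entirely hypothetical data, looks like this:

```python
from scipy.stats import spearmanr

# Hypothetical data, for illustration only: DLM pustule counts per photograph
# and the corresponding 0-4 severity grades assigned by a dermatologist
predicted_counts = [0, 3, 12, 7, 25, 40, 1, 18]
severity_grades = [0, 1, 3, 2, 4, 4, 0, 3]

# Spearman correlation compares ranks, so it captures monotone agreement
# even though counts and grades are on different scales
rho, p_value = spearmanr(predicted_counts, severity_grades)
```

Because the coefficient is rank-based, it is appropriate here: an ordinal severity grade cannot be compared to a raw count on a linear scale, but both can be compared by ordering.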

    Improved diagnosis by automated macro- and micro-anatomical region mapping of skin photographs

    No full text
    Background: The exact location of skin lesions is key in clinical dermatology. On one hand, it supports differential diagnosis (DD), since most skin conditions have specific predilection sites. On the other hand, location matters for dermatosurgical interventions. In practice, lesion evaluation is not well standardized, and anatomical descriptions vary or are lacking altogether. Automated determination of anatomical location could benefit both situations. Objective: Establish an automated method to determine anatomical regions in clinical patient pictures and evaluate the gain in DD performance of a deep learning model (DLM) when trained with lesion locations and images. Methods: Retrospective study based on three datasets: macro-anatomy for the main body regions with 6000 patient pictures partially labelled by a student, micro-anatomy for the ear region with 182 pictures labelled by a student, and DD with 3347 pictures of 16 diseases determined by dermatologists in clinical settings. For each dataset, a DLM was trained and evaluated on an independent test set. The primary outcome measures were the precision and sensitivity with 95% CI. For DD, we compared the performance of a DLM trained with lesion pictures only with a DLM trained with both pictures and locations. Results: The average precision and sensitivity were 85% (CI 84–86) and 84% (CI 83–85) for macro-anatomy, 81% (CI 80–83) and 80% (CI 77–83) for micro-anatomy, and 82% (CI 78–85) and 81% (CI 77–84) for DD. We observed an improvement in DD performance of 6% (McNemar test P-value 0.0009) for both average precision and sensitivity when training with both lesion pictures and locations. Conclusion: Including location can be beneficial for DD DLM performance. The proposed method can generate body region maps from patient pictures and even reach surgery-relevant anatomical precision, e.g. the ear region. Our method enables automated search of large clinical databases and makes targeted anatomical image retrieval possible.
    ISSN:0926-9959
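The McNemar test reported above is the standard way to compare two classifiers evaluated on the same test images: it looks only at the cases where the two models disagree. A minimal sketch with the chi-square form of the test follows; the disagreement counts are hypothetical, not the study's data.

```python
def mcnemar_chi2(b: int, c: int) -> float:
    """McNemar chi-square statistic with continuity correction for two
    paired classifiers. `b` = cases only model A classified correctly,
    `c` = cases only model B classified correctly.
    """
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical disagreement counts: the image-only model is right on 30
# cases the image+location model misses, and the reverse holds on 70 cases
stat = mcnemar_chi2(30, 70)

# Under the null hypothesis the statistic follows chi-square with 1 degree
# of freedom; 3.84 is the critical value at alpha = 0.05
significant = stat > 3.84
```

Pairing the predictions this way is what makes the comparison fair: images both models get right (or both get wrong) carry no information about which model is better, so only the off-diagonal disagreements enter the statistic.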